Non-record: 11L GEPA + 20k Steps + Pure Int6 + Legal TTT (val_bpb=1.0983): unlimited compute: 4×A100-40GB, ~2.8 hours#628
Open
Christopher-Lee-McClendon wants to merge 2 commits into openai:main from
Conversation
- Non-record unlimited-compute submission: val_bpb = 1.0983 (below 1.10)
- 20,000-step training (12,000 peak-LR + 8,000 warmdown) on 4×A100-40GB
- Pure int6 per-row quantization with 15-candidate GPTQ-lite + zstd-22
- Legal score-first TTT (SGD, 10 epochs, momentum 0.9): −0.044 BPB gain
- Float baseline 1.1153; artifact 14.29 MB (14,985,742 bytes)
- Finding 1: Warmdown is a first-class variable (6× late-plateau rate)
- Finding 2: Better-trained models compress smaller
- Finding 3: SGD >> AdamW for legal TTT (2.4× gain, same base)
- Finding 4: Freezing early layers is active regularization
- Finding 5: After the right TTT family, invest in the base model
- "What Transfers to Record Track" section
- "Open Frontiers" section
11L GEPA + 20k Steps + Pure Int6 + Legal TTT → 1.0983 BPB
Non-record unlimited-compute submission. Breaks 1.10 BPB with legal score-first TTT.
Key Numbers
Scaling Table
Research Contributions (5 transferable findings)
Warmdown is a first-class variable — The model plateaus at ~1.216 BPB during late peak-LR (steps 7k–12k, improving at ~2 mBPB/kstep), then warmdown delivers −0.101 BPB over 8k steps (~12.6 mBPB/kstep) — roughly 6× the late-plateau rate. Warmdown isn't cleanup; it's where most of the remaining gain originates once the plateau sets in.
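The 12k-peak / 8k-warmdown schedule above can be sketched as a constant-then-linear-decay function. This is a minimal illustration, not the PR's code; the `peak_lr` value and the warmup-free shape are assumptions — the PR specifies only the 12,000 peak-LR and 8,000 warmdown step counts.

```python
def lr_at(step: int, peak_lr: float = 3e-3,
          peak_steps: int = 12_000, warmdown_steps: int = 8_000) -> float:
    """Constant peak LR, then linear warmdown to zero.

    20k total steps as in the PR: 12k at peak LR, 8k warmdown.
    peak_lr=3e-3 is an illustrative placeholder, not from the PR.
    """
    if step < peak_steps:
        return peak_lr
    # Linear decay over the warmdown phase, clamped at zero.
    frac = (step - peak_steps) / warmdown_steps
    return peak_lr * max(0.0, 1.0 - frac)
```

Extending the warmdown fraction (≥40% of total steps, per the transfer list below) only changes `warmdown_steps` relative to `peak_steps`.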
Better-trained models compress smaller — 20k-step model → 14.29 MB (smallest artifact), despite identical architecture and quantization. Optimization quality improves weight compressibility, not just float loss.
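For reference, the per-row int6 quantizer underlying the artifact can be sketched as below. This is a baseline round-to-nearest version only; the PR's 15-candidate GPTQ-lite scale search and zstd-22 packing are omitted, and the function names are mine.

```python
import numpy as np

def quantize_int6_per_row(w: np.ndarray):
    """Symmetric per-row int6 quantization (sketch, not the PR's code).

    One scale per row so each row spans the full [-31, 31] int6 range.
    """
    scales = np.abs(w).max(axis=1, keepdims=True) / 31.0
    scales[scales == 0] = 1.0  # avoid divide-by-zero on all-zero rows
    q = np.clip(np.round(w / scales), -31, 31).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales
```

The PR's GPTQ-lite step would try 15 candidate scales per row and keep the one minimizing reconstruction error before entropy-coding the packed int6 stream with zstd level 22.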
SGD >> AdamW for legal TTT (controlled comparison) — On the same 5.2k-step, 24.6M-param base model, SGD+momentum delivers 2.4× the TTT gain of AdamW (−0.017 vs −0.007, float→final). Adam's moment estimates can't converge in ~30 steps/chunk. Separately, the 20k GEPA model's −0.044 TTT gain is measured from a different baseline (quant→final) and a different architecture, so it should not be compared directly.
Freezing early layers is active regularization — Freezing 2 of 11 blocks (~18% of depth) during TTT isn't just a catastrophic-forgetting defense. Early layers hold generic features; later layers are the better adaptation surface. Even though freezing removes trainable parameters, the model adapts better.
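Findings 3 and 4 combine into a TTT loop of the following shape. This is a minimal numpy sketch, not the PR's implementation: only SGD with momentum 0.9, 10 epochs, and freezing the first 2 of 11 blocks come from the PR; the learning rate, `grad_fn` interface, and per-block parameter layout are placeholders.

```python
import numpy as np

def ttt_sgd(params, grad_fn, lr=0.01, momentum=0.9,
            epochs=10, n_frozen=2):
    """Test-time training: SGD+momentum with early blocks frozen.

    params:  list of per-block weight arrays (11 blocks in the PR's model)
    grad_fn: callable returning one gradient array per block
    Skipping the first n_frozen blocks keeps generic early features
    fixed and adapts only the later layers (Finding 4).
    """
    velocity = [np.zeros_like(p) for p in params]
    for _ in range(epochs):
        grads = grad_fn(params)
        for i in range(n_frozen, len(params)):  # frozen blocks never update
            velocity[i] = momentum * velocity[i] - lr * grads[i]
            params[i] = params[i] + velocity[i]
    return params
```

On a toy quadratic loss this shrinks the trainable blocks toward the optimum while leaving the frozen blocks bit-identical — the same invariant the PR relies on for legality of the early-layer freeze.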
After the right TTT family, invest in the base model — TTT's share of total gain over naive baseline shrinks from 22% (5.2k-step base) to 13% (20k-step base). The big jump came from choosing the right TTT regime (SGD + freeze + multi-epoch). After that, base model quality delivers more BPB per unit of effort than TTT micro-tuning.
What Transfers to Record Track
✅ Warmdown emphasis (≥40% of total steps)
✅ GPTQ-lite / pure int6
✅ SGD-based legal TTT (2.4× gain over AdamW, validated on same base)
✅ Freeze-early-blocks as TTT regularization
Open Frontiers
The local TTT recipe appears mostly saturated. Next questions are structural: stream vs. document-based adaptation, self-distillation at test time, quantization-aware TTT, and base-training scaling laws under fixed 16 MB budget.
Full analysis with all tables and derivations in README.md.
Prior Non-Record Submissions
Acknowledgments
Builds on techniques from: @signalrush (PR #414, GPTQ-lite/EMA), @jfprincz (PRs #287/#315, XSA/Partial RoPE/LN Scale), @unnir (PR #265, Efficient XSA), @raahilshah (PR #162, SmearGate/BigramHash), @aruniyer (PR #86, Int6 QAT), @samacqua (LoRA TTT), @abaybektursun (PR #549, LeakyReLU²), and the OpenAI baseline.